Consultant for AI applications (f/m/x) at Helmholtz Zentrum München - Neuherberg near Munich
Helmholtz Munich is a research center with the mission to discover personalized medical solutions for environmentally triggered diseases and to promote a healthier society in a rapidly changing world. Germany's largest research organization, the Helmholtz Association, launched Helmholtz AI: this dedicated interdisciplinary platform develops and promotes applied Artificial Intelligence (AI) methods for the Association's main research fields (Health, Energy, Earth and Environment, Information, Space, Matter) in collaboration with its external and university partners. Its central unit operates at Helmholtz Munich in Munich. Our mission is to enable research scientists to leverage AI methods optimally. The Munich consultant team focuses on Health and collaborates with researchers from across the Association on projects in cancer and infection research, molecular medicine, and neurodegenerative and environmental diseases.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (1.00)
- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.42)
Generation equality: Empowering and giving visibility to women in robotics
On March 8, International Women's Day (IWD), we celebrate the political, socioeconomic and cultural achievements of women and the women's rights movement towards gender equality. "Whilst the social and political rights of women are greater in some places than others, there is no country where gender equality has been achieved," says Mary Evans, professor at the London School of Economics and Political Science, in her book "The Persistence of Gender Inequality" (Polity Press, 2017). In 2022 this situation has not changed, either globally or at the European level, as indicated by the EU Gender Equality Index for 2020, where the EU average is 67.4 out of 100 and the highest score is Sweden's 83.8. Although there has been a clear commitment from the European Union to gender equality (especially in innovation and science), there are still structural forms of inequality that must be challenged and changed. It is not the aim of this article to analyse or comment on those, but to show what is being done and what is available, especially in the European Union, for us to contribute as individuals and as a community towards gender equality in the field of robotics.
- Europe > Sweden (0.25)
- North America > United States (0.05)
- Europe > Romania > București - Ilfov Development Region > Municipality of Bucharest > Bucharest (0.05)
- Law > Civil Rights & Constitutional Law (1.00)
- Education (1.00)
- Government > Regional Government > Europe Government (0.57)
Gender Inequality Persists in Data Science and AI
Results of a survey of data professionals show that only about 1 in 5 are women. Women are paid less than their male counterparts, even though women and men have similar levels of education. Ways of improving gender diversity in the field of data science are offered. Even though women make up about half of the total workforce in the US, those numbers hide disparities in some occupational domains. As Figure 1 shows, while women make up about half of the life, physical and social science occupations in the US, they account for only 25% and 17% of professionals in computer and mathematical occupations and in architecture and engineering occupations, respectively.
- Law > Civil Rights & Constitutional Law (0.51)
- Information Technology (0.49)
Predicting cardiovascular risk from national administrative databases using a combined survival analysis and deep learning approach
Barbieri, Sebastiano, Mehta, Suneela, Wu, Billy, Bharat, Chrianna, Poppe, Katrina, Jorm, Louisa, Jackson, Rod
AIMS. This study compared the performance of deep learning extensions of survival analysis models with traditional Cox proportional hazards (CPH) models for deriving cardiovascular disease (CVD) risk prediction equations in national health administrative datasets. METHODS. Using individual person linkage of multiple administrative datasets, we constructed a cohort of all New Zealand residents aged 30-74 years who interacted with publicly funded health services during 2012, and identified hospitalisations and deaths from CVD over five years of follow-up. After excluding people with prior CVD or heart failure, sex-specific deep learning and CPH models were developed to estimate the risk of fatal or non-fatal CVD events within five years. The proportion of explained time-to-event occurrence, calibration, and discrimination were compared between models across the whole study population and in specific risk groups. FINDINGS. First CVD events occurred in 61,927 of 2,164,872 people. Among diagnoses and procedures, the deep learning models associated the largest 'local' hazard ratios with tobacco use in women (2.04, 95% CI: 1.99-2.10) and with chronic obstructive pulmonary disease with acute lower respiratory infection in men (1.56, 95% CI: 1.50-1.62). Other identified predictors (e.g. hypertension, chest pain, diabetes) aligned with current knowledge about CVD risk predictors. The deep learning models significantly outperformed the CPH models on the basis of proportion of explained time-to-event occurrence (Royston and Sauerbrei's R-squared: 0.468 vs. 0.425 in women and 0.383 vs. 0.348 in men), calibration, and discrimination (all p<0.0001). INTERPRETATION. Deep learning extensions of survival analysis models can be applied to large health administrative databases to derive interpretable CVD risk prediction equations that are more accurate than traditional CPH models.
- Oceania > Australia > New South Wales > Sydney (0.04)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > United States (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (0.51)
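The discrimination comparison in the abstract above rests on the concordance index (c-index), the standard measure of how well a survival model ranks patients by risk: among comparable pairs, the patient who has the event earlier should have been assigned the higher risk. A minimal sketch in plain Python, with made-up times, event flags, and risk scores (none of these numbers come from the study):

```python
def concordance_index(times, events, risks):
    """Harrell's c-index for right-censored survival data.

    times  - observed follow-up time per person
    events - 1 if the event (e.g. a CVD event) was observed, 0 if censored
    risks  - model-assigned risk score per person (higher = riskier)
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Pair (i, j) is comparable when i's event is observed
            # and occurs before j's follow-up time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0       # model ranked the pair correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5       # tied scores count half
    return concordant / comparable

# Perfect ranking: earlier events got strictly higher risk scores.
print(concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # → 1.0
```

The study's models are far richer (sex-specific CPH and deep learning extensions fitted on millions of linked records), but any of them can be compared with exactly this pairwise ranking statistic; a c-index of 0.5 is chance-level, 1.0 is perfect discrimination.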
Discovering and Interpreting Conceptual Biases in Online Communities
Ferrer-Aran, Xavier, van Nuenen, Tom, Criado, Natalia, Such, Jose M.
Language carries implicit human biases, functioning both as a reflection and a perpetuation of stereotypes that people carry with them. Recently, ML-based NLP methods such as word embeddings have been shown to learn such language biases with striking accuracy. This capability of word embeddings has been successfully exploited as a tool to quantify and study human biases. However, previous studies only consider a predefined set of conceptual biases to attest (e.g., whether gender is more or less associated with particular jobs), or just discover biased words without helping to understand their meaning at the conceptual level. As such, these approaches are either unable to find conceptual biases that have not been defined in advance, or the biases they find are difficult to interpret and study. This makes existing approaches unsuitable to discover and interpret biases in online communities, as such communities may carry different biases than those in mainstream culture. This paper proposes a general, data-driven approach to automatically discover and help interpret conceptual biases encoded in word embeddings. We apply this approach to study the conceptual biases present in the language used in online communities and experimentally show the validity and stability of our method.
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > United States > California > Monterey County > Pacific Grove (0.04)
- Asia > India (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
Discovering and Categorising Language Biases in Reddit
Ferrer, Xavier, van Nuenen, Tom, Such, Jose M., Criado, Natalia
We present a data-driven approach using word embeddings to discover and categorise language biases on the discussion platform Reddit. As spaces for isolated user communities, platforms such as Reddit are increasingly connected to issues of racism, sexism and other forms of discrimination. Hence, there is a need to monitor the language of these groups. One of the most promising AI approaches to trace linguistic biases in large textual datasets involves word embeddings, which transform text into high-dimensional dense vectors and capture semantic relations between words. Yet, previous studies require predefined sets of potential biases to study, e.g., whether gender is more or less associated with particular types of jobs. This makes these approaches unfit to deal with smaller and community-centric datasets such as those on Reddit, which contain smaller vocabularies and slang, as well as biases that may be particular to that community. This paper proposes a data-driven approach to automatically discover language biases encoded in the vocabulary of online discourse communities on Reddit. In our approach, protected attributes are connected to evaluative words found in the data, which are then categorised through a semantic analysis system. We verify the effectiveness of our method by comparing the biases we discover in the Google News dataset with those found in previous literature. We then successfully discover gender bias, religion bias, and ethnic bias in different Reddit communities. We conclude by discussing potential application scenarios and limitations of this data-driven bias discovery method.
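Both papers above build on the same primitive: measuring how strongly a word's embedding associates with one attribute set (e.g. female terms) versus another (e.g. male terms) via cosine similarity. A minimal sketch with hand-crafted two-dimensional toy vectors; real studies use embeddings trained on the community's own text (e.g. word2vec on Reddit comments), and every vector below is invented purely for illustration:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    # Positive -> the word leans towards A; negative -> towards B.
    mean_a = sum(cosine(word_vec, a) for a in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(word_vec, b) for b in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Toy 2-D "embeddings" (invented): one axis loosely "female", one "male".
she, he = [1.0, 0.0], [0.0, 1.0]
word = [0.9, 0.1]  # a hypothetical vocabulary word sitting near "she"

print(association(word, [she], [he]))  # positive: leans towards the female set
```

A positive score means the word sits closer to the first attribute set; scanning the whole vocabulary for the most strongly associated evaluative words, then grouping them semantically, is in essence the discovery-and-categorisation step these papers automate.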
AI Bias Could Put Women's Lives At Risk - A Challenge For Regulators
When the European Commission released the long-awaited white paper "On Artificial Intelligence - A European approach to excellence and trust" on February 19, much of the initial public reaction focused on how potential AI regulation might further challenge the EU's position in light of fierce technological competition from China and the United States. Few discussed the document's mention of gender and ethical guidelines. Importantly, the white paper calls for "requirements to take reasonable measures aimed at ensuring that [the] use of AI systems does not lead to outcomes entailing prohibited discrimination." This is not simply about a theoretical approach to discrimination. It is largely also about saving (women's) lives - and ensuring that essential products and services meet the needs of both women and men.
- North America > United States (0.35)
- Asia > China (0.25)
- Europe > Germany (0.17)
- (5 more...)
- Government (1.00)
- Health & Medicine > Therapeutic Area (0.98)
- Law > Civil Rights & Constitutional Law (0.70)
Machine learning shows no difference in angina symptoms between men and women
The symptoms of angina--the pain that occurs in coronary artery disease--do not differ substantially between men and women, according to the results of an unusual new clinical trial led by MIT researchers. The findings could help overturn the prevailing notion that men and women experience angina differently, with men experiencing "typical angina"--pain-type sensations in the chest, for instance--and women experiencing "atypical angina" symptoms such as shortness of breath and pain-type sensations in non-chest areas such as the arms, back, and shoulders. Instead, it appears that men's and women's symptoms are largely the same, say Karthik Dinakar, a research scientist at the MIT Media Lab, and Catherine Kreatsoulas of the Harvard T.H. Chan School of Public Health. Dinakar and his colleagues presented the results of their HERMES angina trial at the European Society of Cardiology's annual congress in September. Their research is one of the first clinical trials accepted at the prestigious conference to use machine learning techniques, which were applied to characterize the full range of symptoms experienced by individual patients and to capture nuances in how they described their symptoms in a natural language exchange. The trial included 637 patients in the United States and Canada who had been referred for their first coronary angiogram, the gold-standard test to diagnose coronary artery disease.
- North America > Canada (0.26)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Europe > France > Île-de-France > Paris > Paris (0.05)
- Research Report > Experimental Study (0.91)
- Research Report > New Finding (0.58)
Free AI webinars for women and men launched by Sarah Burnett
In March 2017, Sarah Burnett, chair of BCSWomen, launched the brand new AI Accelerator for BCSWomen in association with the BCS AI Special Interest Group (SIG). Based on broadcasts, panel sessions and social media discussions about Artificial Intelligence (AI), its prime purpose is to make AI more relevant to women and to encourage more women into computing. It is free - and open to men as well. We hope women will sign up in their droves to the AI Accelerator and see just what opportunities there are. Our first event was a live webcast held on March 30.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Social Media (0.47)
- Information Technology > Communications > Web (0.40)
Research breaks down who people think should die in a car crash
An experiment has investigated human morality and ranked countries based on whom respondents would save in a certain-death situation. The findings reveal that the value placed on a life varies between countries: French respondents, for example, were far more likely to save women than men. The four most spared characters in the game are a baby, a little girl, a little boy and a pregnant woman. The game posed difficult ethical decisions, such as choosing between the lives of a family of four crossing the road and a group of pensioners going the other way. Quandaries like this will one day be faced by autonomous vehicles, which will be programmed with algorithms that place a value on human life.
- Europe > United Kingdom (0.05)
- North America > United States > Massachusetts (0.05)
- Europe > Lithuania (0.05)
- (5 more...)
- Automobiles & Trucks (0.98)
- Transportation > Passenger (0.31)
- Transportation > Ground > Road (0.31)